Emotion recognition in conversation (ERC) aims to predict the emotion label of each utterance in a conversation. Because the dependencies among speakers are complex and dynamic, including both intra-speaker and inter-speaker dependencies, modeling speaker-specific information plays a crucial role in ERC. Although existing studies have proposed various methods for modeling speaker interactions, they cannot jointly explore dynamic intra- and inter-speaker dependencies, leading to an insufficient understanding of context and further hindering emotion prediction. To this end, we design a novel speaker-modeling scheme that jointly explores intra- and inter-speaker dependencies in a dynamic manner. Furthermore, we propose a Speaker-Guided Encoder-Decoder (SGED) framework for ERC, which fully exploits speaker information when decoding emotions. We use different existing methods as the conversational context encoder of our framework, demonstrating the high scalability and flexibility of the proposed framework. Experimental results demonstrate the superiority and effectiveness of SGED.
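As an illustration only (not the authors' implementation), the idea of speaker-guided decoding can be sketched as fusing each utterance's context vector with an embedding of its speaker before predicting the emotion; all dimensions, weights, and the emotion inventory below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def speaker_guided_decode(ctx, speaker_ids, n_speakers, n_emotions, dim):
    """Toy decoder: fuse each utterance's context vector with its
    speaker embedding before projecting to emotion logits."""
    spk_emb = rng.normal(size=(n_speakers, dim))   # one vector per speaker
    W = rng.normal(size=(2 * dim, n_emotions))     # classifier over fused features
    fused = np.concatenate([ctx, spk_emb[speaker_ids]], axis=-1)
    logits = fused @ W
    return logits.argmax(axis=-1)

ctx = rng.normal(size=(4, 8))    # 4 utterances, 8-dim context features
preds = speaker_guided_decode(ctx, [0, 1, 0, 1], n_speakers=2, n_emotions=6, dim=8)
print(preds.shape)  # (4,)
```

In SGED the context vectors would come from any existing ERC context encoder, which is what gives the framework its plug-and-play flexibility.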
The emotion-cause pair extraction (ECPE) task aims to extract emotions and their causes from a document. We observe that, in typical ECPE datasets, the distribution of relative distances between an emotion clause and its cause clause is extremely imbalanced. Existing methods set a fixed-size window to capture the relations between neighboring clauses, but they ignore effective semantic connections between distant clauses, leading to poor generalization on position-insensitive data. To alleviate this problem, we propose a novel Multi-Granularity Semantic Aware Graph model (MGSAG) that jointly incorporates fine-grained and coarse-grained semantic features without any distance limitation. In particular, we first explore the semantic dependencies between clauses and keywords extracted from the document, which convey fine-grained semantic features, to obtain keyword-enhanced clause representations. Besides, a clause graph is also established to model the coarse-grained semantic relations among clauses. Experimental results indicate that MGSAG surpasses existing state-of-the-art ECPE models. In particular, MGSAG dramatically outperforms the other models on position-insensitive data.
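A minimal sketch of the keyword-enhancement idea, assuming a simple clause-keyword bipartite graph and one round of mean aggregation (the paper's actual graph model is more elaborate; all features here are toy values):

```python
import numpy as np

def keyword_enhance(clause_feats, keyword_feats, links):
    """One message-passing round over a clause-keyword bipartite graph:
    each clause representation is enhanced with the mean feature of the
    keywords it links to, with no distance window involved."""
    enhanced = clause_feats.copy()
    for c, kws in links.items():
        if kws:  # clauses linked to no keyword stay unchanged
            enhanced[c] = clause_feats[c] + keyword_feats[list(kws)].mean(axis=0)
    return enhanced

clauses = np.zeros((3, 4))           # 3 clauses, 4-dim features
keywords = np.eye(4)                 # 4 keywords, one-hot features
links = {0: [0], 1: [1, 2], 2: []}   # clause 2 links to no keyword
out = keyword_enhance(clauses, keywords, links)
print(out[1])  # mean of keyword rows 1 and 2
```

Because the links are semantic rather than positional, a distant cause clause can be connected to an emotion clause as easily as an adjacent one.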
Learning semantic-rich representations from raw unlabeled time series data is critical for downstream tasks such as classification and forecasting. Contrastive learning has recently shown its promising representation learning capability in the absence of expert annotations. However, existing contrastive approaches generally treat each instance independently, which leads to false negative pairs that share the same semantics. To tackle this problem, we propose MHCCL, a Masked Hierarchical Cluster-wise Contrastive Learning model, which exploits semantic information obtained from the hierarchical structure consisting of multiple latent partitions for multivariate time series. Motivated by the observation that fine-grained clustering preserves higher purity while coarse-grained one reflects higher-level semantics, we propose a novel downward masking strategy to filter out fake negatives and supplement positives by incorporating the multi-granularity information from the clustering hierarchy. In addition, a novel upward masking strategy is designed in MHCCL to remove outliers of clusters at each partition to refine prototypes, which helps speed up the hierarchical clustering process and improves the clustering quality. We conduct experimental evaluations on seven widely-used multivariate time series datasets. The results demonstrate the superiority of MHCCL over the state-of-the-art approaches for unsupervised time series representation learning.
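The downward-masking idea can be illustrated with a toy cluster partition: samples sharing the anchor's cluster are treated as positives rather than negatives, filtering out fake negatives. This is a simplified single-partition sketch, not MHCCL's full multi-granularity hierarchy.

```python
import numpy as np

def downward_mask(assignments, anchor):
    """At one clustering partition, mask out 'fake negatives': samples in
    the anchor's cluster become positives, everything else stays negative."""
    same = assignments == assignments[anchor]
    idx = np.arange(len(assignments))
    positives = np.flatnonzero(same & (idx != anchor))
    negatives = np.flatnonzero(~same)
    return positives, negatives

labels = np.array([0, 0, 1, 1, 2])     # cluster assignment per sample
pos, neg = downward_mask(labels, anchor=0)
print(pos, neg)  # [1] [2 3 4]
```

In MHCCL this filtering is applied across multiple partitions of the clustering hierarchy, with coarser partitions supplying higher-level semantic positives.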
Most aerial manipulators use serial rigid-link designs, which produce large forces when contact is initiated during manipulation and can cause flight-stability difficulties. The compliance of continuum manipulators can mitigate this limitation. To achieve this, we present a novel design of a compact, lightweight, and modular cable-driven continuum manipulator for aerial drones. We then derive a complete modeling framework for its kinematics, statics, and stiffness (compliance). This framework is essential for integrating the manipulator into aerial drones. Finally, we report preliminary experimental validation of the hardware prototype, providing insights into its manipulation feasibility. Future work includes the integration and testing of the proposed continuum manipulator with aerial drones.
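For intuition only: continuum-manipulator kinematics are often written under a piecewise constant-curvature assumption, where a single segment of arc length $L$ and curvature $\kappa$ bends through angle $\theta = \kappa L$. The snippet below computes a planar tip position under that common assumption; it is a textbook simplification, not the paper's full modeling framework.

```python
import math

def tip_position(arc_len, kappa):
    """Planar tip position of one segment under the constant-curvature
    assumption: bends through theta = kappa * arc_len along a circular arc."""
    if abs(kappa) < 1e-9:          # straight-segment limit
        return 0.0, arc_len
    theta = kappa * arc_len
    return (1 - math.cos(theta)) / kappa, math.sin(theta) / kappa

print(tip_position(0.2, 0.0))            # straight: (0.0, 0.2)
x, z = tip_position(0.2, math.pi / 0.4)  # bends through 90 degrees
```

Note the straight-line limit must be handled separately, since the circular-arc formulas divide by the curvature.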
WiFi technology has been applied in various places due to the increasing demand for high-speed Internet access. Recently, besides network services, WiFi sensing has become attractive in smart homes since it is device-free, cost-effective, and privacy-preserving. Although many WiFi sensing methods have been developed, most of them only consider a single smart-home scenario; without the connection of powerful cloud servers and massive users, large-scale WiFi sensing is still difficult. In this paper, we first analyze and summarize these obstacles, and then propose an efficient large-scale WiFi sensing framework, namely EfficientFi. EfficientFi works with edge computing at WiFi APs and cloud computing at a center server. It consists of a novel deep neural network that compresses fine-grained WiFi channel state information (CSI) at the edge, restores the CSI in the cloud, and performs sensing tasks simultaneously. A quantized auto-encoder and a joint classifier are designed to achieve these goals in an end-to-end fashion. To the best of our knowledge, EfficientFi is the first IoT-cloud-enabled WiFi sensing framework that significantly reduces communication overhead while accomplishing sensing tasks accurately. We utilize human activity recognition and identification via WiFi sensing as two case studies and conduct extensive experiments to evaluate EfficientFi. The results show that it compresses CSI data from 1.368 Mb/s to 0.768 kb/s with an extremely low error of data reconstruction and achieves over 98% accuracy for human activity recognition.
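The compression mechanism behind a quantized auto-encoder can be sketched with plain nearest-neighbour vector quantization: each latent vector is replaced by its closest codebook entry, so only the integer index must be transmitted to the cloud. All sizes below are illustrative, and this omits the learned encoder/decoder and the joint classifier of EfficientFi.

```python
import numpy as np

rng = np.random.default_rng(1)

def quantize(z, codebook):
    """Nearest-neighbour vector quantization: each latent vector is replaced
    by its closest codebook entry; only the entry index needs to be sent."""
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = dists.argmin(axis=1)
    return idx, codebook[idx]

csi_latent = rng.normal(size=(5, 3))   # 5 frames of 3-dim latent CSI codes
codebook = rng.normal(size=(16, 3))    # 16 entries -> 4 bits per frame
idx, zq = quantize(csi_latent, codebook)
print(idx.shape, zq.shape)             # (5,) (5, 3)
```

Transmitting 4-bit indices instead of floating-point vectors is what makes the bandwidth reduction (Mb/s down to kb/s) possible; the cloud side looks the indices back up in the shared codebook.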
Traditional Chinese medicine (TCM) is a natural, safe, and effective therapy that has spread and been applied worldwide. The unique TCM diagnosis and treatment system requires a comprehensive analysis of a patient's symptoms hidden in clinical records written in free text. Prior studies have shown that this system can be informed and intelligent with the aid of artificial intelligence (AI) technologies such as natural language processing (NLP). However, existing datasets lack the quality and quantity needed to support the further development of data-driven AI technologies in TCM. Therefore, in this paper, we focus on the core task of the TCM diagnosis and treatment system, syndrome differentiation (SD), and introduce the first public large-scale dataset for SD, called TCM-SD. Our dataset contains 54,152 real-world clinical records covering 148 syndromes. Furthermore, we collect a large-scale unlabeled textual corpus in the TCM domain and propose a domain-specific pre-trained language model, called ZY-BERT. We conducted experiments using deep neural networks to establish strong performance baselines, reveal various challenges in SD, and demonstrate the potential of domain-specific pre-trained language models. Our study and analysis reveal opportunities to incorporate computer science and linguistics knowledge to explore the empirical validity of TCM theories.
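Framed as NLP, syndrome differentiation is text classification over free-text clinical records. Purely as a hypothetical baseline (a stand-in for ZY-BERT's learned representations, with an invented toy vocabulary), a record can be mapped to bag-of-words counts and fed to any classifier:

```python
from collections import Counter

def bow_features(text, vocab):
    """Hypothetical baseline featurizer: bag-of-words counts of a clinical
    record over a fixed symptom vocabulary."""
    counts = Counter(text.split())
    return [counts[w] for w in vocab]

vocab = ["fever", "cough", "fatigue"]          # toy symptom vocabulary
print(bow_features("fever cough fever", vocab))  # [2, 1, 0]
```

A domain-specific pre-trained model replaces such sparse counts with dense contextual features, which is where the reported gains over generic baselines come from.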
Knowledge graphs (KG) have served as the key component of various natural language processing applications. Commonsense knowledge graphs (CKG) are a special type of KG, where entities and relations are composed of free-form text. However, previous works in KG completion and CKG completion suffer from long-tail relations and newly-added relations which do not have many known triples for training. In light of this, few-shot KG completion (FKGC), which requires the strengths of graph representation learning and few-shot learning, has been proposed to address the problem of limited annotated data. In this paper, we comprehensively survey previous attempts on such tasks in the form of a series of methods and applications. Specifically, we first introduce FKGC challenges, commonly used KGs, and CKGs. Then we systematically categorize and summarize existing works in terms of the type of KGs and the methods. Finally, we present applications of FKGC models on prediction tasks in different areas and share our thoughts on future research directions of FKGC.
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, there is a lack of such unified approaches for UDA tasks in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU.
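The image-level alignment idea can be illustrated with a simple global statistic-matching sketch: shift each channel of a source image to the target domain's per-channel mean and standard deviation. This is a deliberately simple stand-in for the paper's photometric and texture alignment modules, with random toy images.

```python
import numpy as np

def photometric_align(src, tgt):
    """Globally align a source image to a target image's per-channel
    mean and standard deviation (channel-wise affine transform)."""
    mu_s, sd_s = src.mean((0, 1)), src.std((0, 1)) + 1e-8
    mu_t, sd_t = tgt.mean((0, 1)), tgt.std((0, 1)) + 1e-8
    return (src - mu_s) / sd_s * sd_t + mu_t

rng = np.random.default_rng(0)
src = rng.uniform(0.0, 1.0, (8, 8, 3))   # toy "source domain" image
tgt = rng.uniform(0.4, 0.6, (8, 8, 3))   # toy "target domain" image
out = photometric_align(src, tgt)
print(np.allclose(out.mean((0, 1)), tgt.mean((0, 1))))  # True
```

Because the transform is a per-channel affine map, the aligned image matches the target statistics exactly while preserving the source image's spatial content, which is the property the segmentation network relies on.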
Given the increasingly intricate forms of partial differential equations (PDEs) in physics and related fields, computationally solving PDEs without analytic solutions inevitably suffers from the trade-off between accuracy and efficiency. Recent advances in neural operators, a kind of mesh-independent neural-network-based PDE solvers, have suggested the dawn of overcoming this challenge. In this emerging direction, Koopman neural operator (KNO) is a representative demonstration and outperforms other state-of-the-art alternatives in terms of accuracy and efficiency. Here we present KoopmanLab, a self-contained and user-friendly PyTorch module of the Koopman neural operator family for solving partial differential equations. Beyond the original version of KNO, we develop multiple new variants of KNO based on different neural network architectures to improve the general applicability of our module. These variants are validated by mesh-independent and long-term prediction experiments implemented on representative PDEs (e.g., the Navier-Stokes equation and the Bateman-Burgers equation) and ERA5 (i.e., one of the largest high-resolution data sets of global-scale climate fields). These demonstrations suggest the potential of KoopmanLab to be considered in diverse applications of partial differential equations.
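The Koopman idea underlying KNO is that nonlinear dynamics become linear in a suitable space of observables. As a minimal DMD-style sketch (with the raw states as observables and a toy rotational system, rather than KNO's learned neural lifting), one linear operator $K$ is fit by least squares so that each snapshot maps to the next:

```python
import numpy as np

def fit_koopman(states):
    """Fit a single linear advance map K with least squares so that
    states[t+1] ~= K @ states[t] (a DMD-like Koopman approximation)."""
    X, Y = states[:-1].T, states[1:].T   # columns are successive snapshots
    return Y @ np.linalg.pinv(X)

t = np.linspace(0, 1, 50)
traj = np.stack([np.cos(3 * t), np.sin(3 * t)], axis=1)  # rotational dynamics
K = fit_koopman(traj)
pred = traj[:-1] @ K.T
print(np.max(np.abs(pred - traj[1:])) < 1e-6)  # True: dynamics are linear here
```

For this toy system the fit is exact because uniform-step rotation is already linear; KNO's contribution is learning the observables with a neural network so that genuinely nonlinear PDE dynamics admit such a linear operator.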
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
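The style-aware adaptation mechanism can be sketched as FiLM-like modulation: the style code predicts per-channel scale and shift values that adjust a feed-forward layer's output. This is an illustrative simplification with random toy weights, not the paper's exact transformer.

```python
import numpy as np

rng = np.random.default_rng(0)

def style_adaptive_ff(x, style, dim):
    """Sketch of a style-adaptive feed-forward layer: the style code
    predicts a per-channel scale and shift that modulate the output."""
    W = rng.normal(size=(dim, dim))                          # content weights
    mod = style @ rng.normal(size=(style.shape[-1], 2 * dim))
    scale, shift = mod[:dim], mod[dim:]
    return (x @ W) * (1 + scale) + shift

x = rng.normal(size=(4,))       # content features from the audio branch
style = rng.normal(size=(3,))   # style code from the reference video
y = style_adaptive_ff(x, style, dim=4)
print(y.shape)  # (4,)
```

Swapping in a different style code changes the modulation, and hence the rendered speaking style, without touching the content pathway.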